
Search in the Catalogues and Directories

Hits 1 – 19 of 19

1. Knowledge Distillation for Quality Estimation ... (BASE)
2. Knowledge Distillation for Quality Estimation ... (BASE)
3. Controllable Text Simplification with Explicit Paraphrasing ...
   NAACL 2021; Alva-Manchego, Fernando; Maddela, Mounica. Underline Science Inc., 2021. (BASE)
4. The (Un)Suitability of Automatic Evaluation Metrics for Text Simplification ... (BASE)
5. Knowledge distillation for quality estimation
   Gajbhiye, Amit; Fomicheva, Marina; Alva-Manchego, Fernando. Association for Computational Linguistics, 2021. (BASE)
6. deepQuest-py: large and distilled models for quality estimation
   Alva-Manchego, Fernando; Obamuyide, Abiola; Gajbhiye, Amit. Association for Computational Linguistics, 2021. (BASE)
7. IAPUCP at SemEval-2021 task 1: Stacking fine-tuned transformers is almost all you need for lexical complexity prediction
   Rivas Rojas, Kervy; Alva-Manchego, Fernando. Association for Computational Linguistics, 2021. (BASE)
8. The (un)suitability of automatic evaluation metrics for text simplification
   Alva-Manchego, Fernando; Scarton, Carolina; Specia, Lucia. Association for Computational Linguistics, 2021. (BASE)
9. Controllable text simplification with explicit paraphrasing
   Maddela, Mounica; Alva-Manchego, Fernando; Xu, Wei. Association for Computational Linguistics, 2021. (BASE)
10. deepQuest-py: large and distilled models for quality estimation
    In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 382-389 (2021). (BASE)
11. Knowledge distillation for quality estimation
    In: pp. 5091-5099 (2021). (BASE)
12. ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations
    In: ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Jul 2020, Seattle / Virtual, United States; https://hal.inria.fr/hal-02889823 (2020). (BASE)
13. Controllable Text Simplification with Explicit Paraphrasing ... (BASE)
14. ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations ... (BASE)
15. ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations (BASE)
16. Data-Driven Sentence Simplification: Survey and Benchmark
    In: Computational Linguistics, Vol. 46, Iss. 1, pp. 135-187 (2020). (BASE)
17. Automatic Sentence Simplification with Multiple Rewriting Transformations
Abstract: Sentence Simplification aims to rewrite a sentence in order to make it easier to read and understand, while preserving as much as possible of its original meaning. To do so, human editors perform several text transformations, such as replacing complex terms with simpler synonyms, reordering words or phrases, removing non-essential information, and splitting long sentences. However, executing these rewriting operations automatically while keeping sentences grammatical, preserving their main idea, and generating simpler output is a challenging and still far from solved problem. Considering that simplifications produced by humans encompass a variety of text transformations, we should expect automatic simplifications to be produced in a similar fashion. However, current data-driven models for the task leverage datasets that do not necessarily contain training instances exhibiting this variety of operations. As such, they tend to copy most of the original content, with only small changes focused on lexical paraphrasing. Furthermore, it is unclear whether this implicit learning of multi-operation simplifications results in automatic outputs with such characteristics, since current automatic evaluation resources (i.e., metrics and test sets) focus on single-operation simplifications. In this thesis, we tackle these limitations of Sentence Simplification research in four aspects.
First, we develop novel annotation algorithms that identify the simplification operations performed by automatic models at the word, phrase, and sentence levels. We propose to use these algorithms in an operation-based error analysis method that measures the correctness of executing specific operations based on reference simplifications. This functionality is incorporated into EASSE, our new software package for standard automatic evaluation of simplification systems. We use EASSE to benchmark several simplification systems, and show that our proposed operation-based error analysis helps to better understand the scores computed using automatic metrics.
Second, we introduce ASSET, a new multi-reference dataset for tuning and evaluating Sentence Simplification models. Reference simplifications in ASSET were produced by human editors applying multiple rewriting transformations. We show that simplifications in ASSET offer more variability than other commonly-used evaluation datasets. In addition, we perform a human evaluation study demonstrating that multi-operation simplifications are judged simpler than single-operation ones. We also motivate the need for new metrics suitable for multi-operation simplification assessment, since we show that judgements of simplicity do not correlate strongly with commonly-used multi-reference metrics computed using multi-operation simplification references.
Third, we carry out the first meta-evaluation of automatic evaluation metrics in Sentence Simplification. We collect a new, more reliable dataset for evaluating the behaviour of metrics against human judgements of simplicity. We use this data (and other existing datasets) to analyse how the correlation between automatic metrics and simplicity judgements varies across three dimensions: the perceived simplicity level, the system type, and the set of references used for computation. We show that these three aspects affect the correlations and, in particular, highlight the limitations of commonly-used simplification-specific metrics. Based on our findings, we elaborate a set of recommendations for automatic evaluation of multi-operation simplification, indicating which metrics to compute and how to interpret their scores.
Finally, we implement MulTSS, a multi-operation Sentence Simplification model based on a multi-task learning architecture. We leverage training data from related text rewriting tasks (lexical paraphrasing, extractive compression, and split-and-rephrase) to enhance the multi-operation capabilities of a standard simplification model. We show that our multi-task approach can generate better simplifications than strong single-task and pipeline baselines.
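The multi-reference evaluation idea in the abstract can be illustrated with a small sketch. This is an illustrative toy scorer only (the function names `token_f1` and `multi_ref_f1` are my own, not the thesis's metrics or the EASSE package): a system simplification is scored against the closest of several human references, so that different valid rewritings are not penalised.

```python
from collections import Counter

def token_f1(sys_tokens, ref_tokens):
    """Token-overlap F1 between a system output and one reference."""
    # Multiset intersection counts each shared token at most
    # min(count in sys, count in ref) times.
    overlap = sum((Counter(sys_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(sys_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def multi_ref_f1(sys_sent, ref_sents):
    """Score a system sentence against the closest of several references."""
    sys_tokens = sys_sent.lower().split()
    return max(token_f1(sys_tokens, r.lower().split()) for r in ref_sents)

# A deletion-style rewrite and a paraphrase-style reference each
# get a fair chance when multiple references are available.
score = multi_ref_f1("the cat sat",
                     ["the cat sat on the mat", "a cat sat"])
```

Real multi-reference metrics such as SARI additionally reward correct additions and deletions relative to the source sentence; the max-over-references step shown here is the part that makes datasets like ASSET, with many diverse references per sentence, useful for evaluation.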
URL: https://etheses.whiterose.ac.uk/28690/
    (BASE)
18. Distributed knowledge based clinical auto-coding system
    Kaur, Rajvir (S33301). U.S.: Association for Computational Linguistics, 2019. (BASE)
19. Towards semi-supervised Brazilian Portuguese semantic role labeling: Building a benchmark (BASE)

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 19
© 2013 - 2024 Lin|gu|is|tik